
Deep Learning-Based Imputation on the FaceMoCap Dataset for Facial Movement Analysis
Accurate facial movement analysis is vital in medical technologies, but existing point cloud datasets focus on static expressions. We introduce FaceMoCap, a marker-based point cloud dataset capturing dynamic facial movements from healthy individuals and facial palsy patients. Movements were recorded by tracking 105 reflective markers with an optical-passive motion capture system. The dataset includes 30 healthy adults (11 males, 19 females, aged 20–31) and 2 female patients with facial palsy (aged 44 and 62). Personalized 3D-printed masks positioned the markers adjacent to key facial muscles and nerves, and a rigid dental support eliminated head movement. Participants performed five facial movements following [1]: gentle and forced eyelid closures, labial protrusion on "o" and "μou" sounds, and a wide smile. To address missing 3D points in the dataset, we implemented an imputation pipeline based on a denoising strategy that leverages consistent, complete point clouds. The pipeline comprises preprocessing, model training (Multi-Layer Perceptron, Autoencoder, Graph Neural Network), evaluation, and imputation with the best-performing model. The Autoencoder performed best, achieving the lowest mean squared error (0.0003) and effectively preserving the original point clouds. FaceMoCap, combined with imputation, provides resources for applications such as quantitative rehabilitation and anomaly detection within bio-engineering and machine learning frameworks. By offering dynamic point cloud data and deep learning techniques, FaceMoCap advances facial movement analysis for clinical and research purposes.
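
The following is a minimal sketch, not the authors' implementation, of how a denoising autoencoder could impute missing 3D marker coordinates as described in the abstract. Only the marker count (105) comes from the abstract; the layer sizes, latent dimension, noise level, training schedule, and masking scheme are illustrative assumptions.

```python
# Hedged sketch: denoising-autoencoder imputation for 3D marker frames.
# Assumptions (not from the abstract): flattened 315-dim frames, layer sizes,
# noise level, and the mask-then-reconstruct imputation scheme.
import torch
import torch.nn as nn

N_MARKERS = 105               # marker count reported in the abstract
DIM = N_MARKERS * 3           # x, y, z coordinates per marker

class Autoencoder(nn.Module):
    def __init__(self, dim=DIM, latent=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(),
                                     nn.Linear(256, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, complete_frames, epochs=200, noise=0.01, lr=1e-3):
    """Denoising training: corrupt complete frames, reconstruct the originals."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        noisy = complete_frames + noise * torch.randn_like(complete_frames)
        loss = loss_fn(model(noisy), complete_frames)
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def impute(model, frame, missing_mask):
    """Replace missing coordinates (mask == True) with the reconstruction."""
    with torch.no_grad():
        recon = model(torch.nan_to_num(frame))
    return torch.where(missing_mask, recon, frame)

if __name__ == "__main__":
    frames = torch.randn(512, DIM)    # stand-in for normalized marker data
    model = Autoencoder()
    final_mse = train(model, frames)
    print(f"training MSE: {final_mse:.4f}")
```

In this sketch, evaluation uses the same mean-squared-error criterion the abstract reports for model comparison; the GNN and MLP baselines would be trained and evaluated analogously before selecting the best model for imputation.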